
    Architectures of Cloud-enabled Cyber Physical Systems — a Systematic Mapping Study

    Cloud-enabled Cyber Physical Systems (CCPS) combine embedded systems with highly scalable cloud services. Such systems provide opportunities to offload computing or data-analytics tasks that require more resources than an embedded device can offer. The development of a CCPS involves multiple stakeholders as well as engineers and developers from different disciplines, which makes describing and communicating the system architecture a challenging task. Additionally, CCPS architecture design faces the inherent challenge of determining which functionality should be placed on the device, in the cloud, or on a possible fog/edge device within or close to the system. This systematic mapping study evaluates how CCPS architectures are discussed in the current literature and which topics are associated with cloud computing in CCPS architectures. The results show a significant increase in CCPS publications in recent years, a focus on a specific architectural viewpoint and on particular application areas, and a potential misalignment with the common understanding of cloud computing as a paradigm.

    Software Sustainability in the Age of Everything as a Service

    The need to acknowledge and manage sustainability as an essential quality of software systems has been steadily increasing over the past few years, in part as a reaction to the implications of "software eating the world". Especially the widespread adoption of the Everything as a Service (*aaS) model of delivering software and (virtualized) hardware through cloud computing has put two sustainability dimensions front and center. On the one hand, services must be sustainable on a technical level by ensuring continuity of operations for both providers and consumers despite, or even better, while taking into account, their evolution. On the other hand, the prosuming of services must also be financially sustainable for the involved stakeholders.

    Improving hardware/software interface management in systems of systems through documentation as code

    Context: The management of Interface Control Documents (ICDs) has shown to be a major pain point in the architecting processes of Systems of Systems (SoS). Objective: This work aims to improve on previously identified ICD management issues using the documentation-as-code philosophy as a potential basis for a treatment, in collaboration with practitioners. Method: We conducted a Technical Action Research (TAR) study with a group of engineers at the Netherlands Institute for Radio Astronomy (ASTRON), in the context of the LOFAR radio telescope. An additional research instrument, in the form of an expert panel, was used to evaluate the transferability of the proposed treatment to other domains. Results: In-depth insights into previously identified interface management issues were gained. Based on these insights, a functional proof of concept was developed to address the issues following documentation-as-code principles. In addition to receiving overall positive reviews from practitioners and experts, further areas of improvement and transferability considerations for future work were identified. Conclusions: The proposed approach, which to our knowledge has not been explored before in this context, is promising for addressing some of the recurring interfacing issues of directed SoS in multiple engineering domains, mainly by enforcing consistency and completeness on both the text-based and formal elements of ICDs and by turning ICDs into single sources of truth for the architecting processes of large-scale SoS.
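The single-source-of-truth idea above implies that ICDs become machine-checkable artifacts. A minimal sketch of such a consistency and completeness check in Python; the schema, field names, and example interface are hypothetical illustrations, not the ASTRON proof of concept:

```python
# Hypothetical documentation-as-code style check for an Interface
# Control Document (ICD). Field names and schema are illustrative
# assumptions, not taken from the ASTRON proof of concept.

REQUIRED_FIELDS = {"name", "direction", "data_type", "description"}

def check_icd(icd: dict) -> list[str]:
    """Return a list of completeness/consistency problems found in an ICD."""
    problems = []
    seen = set()
    for signal in icd.get("signals", []):
        # Completeness: every signal must define all required fields.
        missing = REQUIRED_FIELDS - signal.keys()
        if missing:
            problems.append(
                f"{signal.get('name', '<unnamed>')}: missing {sorted(missing)}"
            )
        # Consistency: no signal may be defined twice.
        name = signal.get("name")
        if name in seen:
            problems.append(f"{name}: duplicate signal definition")
        seen.add(name)
    return problems

icd = {
    "interface": "station-to-correlator",
    "signals": [
        {"name": "timestamp", "direction": "out", "data_type": "uint64",
         "description": "Sample time in UTC nanoseconds"},
        {"name": "payload", "direction": "out", "data_type": "bytes"},
    ],
}

# The second signal lacks a description, so one problem is reported.
print(check_icd(icd))
```

Run as part of a documentation build, a check like this enforces the same completeness rules on every ICD revision, which is the core of the documentation-as-code treatment the abstract describes.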

    Exploring the cost and performance benefits of AWS Step Functions using a data processing pipeline

    In traditional cloud computing, dedicated hardware is substituted by dynamically allocated, utility-oriented resources such as virtualized servers. Although cloud services follow the pay-as-you-go pricing model, resources are billed based on instance allocation rather than actual usage, leading customers to be charged needlessly. In serverless computing, as exemplified by the Function-as-a-Service (FaaS) model where functions are the basic resources, functions are typically not allocated or charged until invoked or triggered. Functions are not applications, however, and to build compelling serverless applications they frequently need to be orchestrated by some kind of application logic. A major issue emerging from the use of orchestration is that it further complicates the already complex billing model used by FaaS providers, which, combined with the lack of granular billing and execution details offered by the providers, makes the development and evaluation of serverless applications challenging. To shed some light on this matter, in this work we extensively evaluate the state-of-the-art function orchestrator AWS Step Functions (ASF) with respect to its performance and cost. For this purpose we conduct a series of experiments using a serverless data processing pipeline application developed as both ASF Standard and Express workflows. Our results show that Step Functions using Express workflows are economical when running short-lived tasks with many state transitions. In contrast, Standard workflows are better suited for long-running tasks, offering in addition detailed debugging and logging information. However, even though the behavior of the orchestrated AWS Lambda functions influences both types of workflows, Step Functions realized as Express workflows are impacted the most by the phenomena affecting Lambda functions.
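The cost trade-off between the two workflow types can be sketched arithmetically. The rates below are placeholder assumptions chosen for illustration (actual AWS prices vary by region and over time); what matters is the structure of the two formulas: per-state-transition billing for Standard versus per-request-plus-duration billing for Express:

```python
# Illustrative comparison of the two AWS Step Functions billing models.
# All rates are assumed placeholders, not current AWS prices.

STANDARD_PER_TRANSITION = 25.0 / 1_000_000  # assumed $ per state transition
EXPRESS_PER_REQUEST = 1.0 / 1_000_000       # assumed $ per workflow request
EXPRESS_PER_GB_SECOND = 0.00001667          # assumed $ per GB-second of duration

def standard_cost(executions: int, transitions_per_execution: int) -> float:
    # Standard workflows: pay per state transition, regardless of duration.
    return executions * transitions_per_execution * STANDARD_PER_TRANSITION

def express_cost(executions: int, duration_s: float, memory_gb: float) -> float:
    # Express workflows: pay per request plus memory x duration.
    return executions * (
        EXPRESS_PER_REQUEST + duration_s * memory_gb * EXPRESS_PER_GB_SECOND
    )

# A short-lived pipeline with many state transitions, run a million times:
print(standard_cost(1_000_000, 10))        # 250.0 with the assumed rate
print(express_cost(1_000_000, 0.5, 0.064)) # ~1.53 with the assumed rates
```

With these numbers the Express workflow is cheaper by two orders of magnitude for short-lived, transition-heavy runs, matching the abstract's finding; as `duration_s` grows, the Express duration term eventually dominates and Standard becomes the better fit.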

    Using a Microbenchmark to Compare Function as a Service Solutions

    The Function as a Service (FaaS) subtype of serverless computing provides the means for abstracting away from the servers on which developed software is meant to be executed. It essentially offers an event-driven and scalable environment in which billing is based on the invocation of functions rather than on the provisioning of resources. This makes it very attractive for many classes of applications with bursty workloads. However, the terms under which FaaS services are structured and offered to consumers use mechanisms like GB-seconds (that is, X gigabytes of memory used for Y seconds of execution) that differ from the usual models for compute resources in cloud computing. Aiming to clarify these terms, in this work we develop a microbenchmark that we use to evaluate the performance and cost model of popular FaaS solutions using well-known algorithmic tasks. The results of this process show a field still very much under development and justify the need for further extensive benchmarking of these services.
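The GB-seconds unit mentioned above can be made concrete with a small cost function. The rates and the rounding granularity below are assumed placeholders (providers differ and change their prices); the sketch only illustrates how memory and billed duration multiply into the charge:

```python
# Sketch of the GB-seconds billing unit used by FaaS providers: the cost
# of one invocation is (memory in GB) x (billed seconds) x rate, plus a
# flat per-request fee. Rates and granularity are assumed placeholders.
import math

PRICE_PER_GB_SECOND = 0.0000166667  # assumed $ per GB-second
PRICE_PER_REQUEST = 0.0000002       # assumed flat $ per invocation

def invocation_cost(memory_mb: int, duration_ms: float,
                    billing_granularity_ms: int = 1) -> float:
    """Cost of a single function invocation under a GB-seconds model."""
    # Providers round the duration up to their billing granularity.
    billed_ms = math.ceil(duration_ms / billing_granularity_ms) * billing_granularity_ms
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# Doubling memory doubles the GB-seconds charge for the same duration...
print(invocation_cost(128, 850))
print(invocation_cost(256, 850))
# ...unless the extra CPU that typically comes with more memory shortens
# the run enough to pay for itself, which is exactly the kind of
# trade-off a microbenchmark has to measure empirically.
```

This is also why per-invocation billing complicates comparisons across providers: the same workload maps to different GB-seconds totals depending on memory sizing and rounding granularity.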

    System- and Software-level Architecting Harmonization Practices for Systems-of-Systems: An exploratory case study on a long-running large-scale scientific instrument

    The gap between system- and software-level architecting practices, especially in the context of Systems of Systems where the two disciplines inexorably meet, is a well-known problem to which a disappointingly small portion of the literature is dedicated. At the same time, organizations working on Systems of Systems have been developing solutions for closing this gap for many years. This work aims to extract such knowledge from practitioners by studying the case of a large-scale scientific instrument, specifically a geographically distributed radio telescope, developed as a sequence of projects over the last two decades. As the means for collecting data for this study, we combine online interviews with a virtual focus group of practitioners from the organization responsible for building the instrument. Through this process, we identify persisting problems and the best practices that have been developed to deal with them, together with the perceived benefits and drawbacks of applying the latter in practice. Our major findings include the need to avoid over-reliance on the flexibility of software to compensate for incomplete requirements, hidden assumptions, and late involvement of system architecting, and the need to facilitate cooperation between the involved disciplines through dedicated architecting roles and the adoption of unifying practices and standards.

    System and software architecting harmonization practices in ultra-large-scale systems of systems: A confirmatory case study

    Context: The challenges posed by the architecting of Systems of Systems (SoS) have motivated a significant number of research efforts in the area. However, the literature is lacking when it comes to the interplay between the disciplines involved in the architecting process, a key factor in addressing these challenges. Objective: This paper aims to contribute to this line of research by confirming and extending previously characterized architecting harmonization practices from Systems and Software Engineering, as adopted in an ultra-large-scale SoS. Method: We conducted a confirmatory case study on the Square Kilometre Array (SKA) project to evaluate and extend the findings of our exploratory case on the LOFAR/LOFAR2.0 radio-telescope projects. A pre-study was conducted to map the findings of the previous study to the SKA context. A survey was then designed, through which the views of 46 SKA engineers were collected and analyzed. Results: The study confirmed, to varying degrees, the four practices identified in the exploratory case and provided further insights about them, namely: (1) the friction between disciplines caused by long-term system requirements, and how it can be ameliorated through intermediate, short-term requirements; (2) the way design choices with a cross-cutting impact on multiple agile teams indirectly affect the system architecture; (3) how these design choices are often caused by the criteria that guided early system decomposition; (4) the seemingly recurrent issue of missing details about the dynamic elements of the interfaces; and (5) the use of machine-readable interface specifications for aligning hardware/software development processes.

    Challenges for the comprehensive management of cloud services in a PaaS framework

    The 4CaaSt project aims at developing a PaaS framework that enables the flexible definition, marketing, deployment, and management of Cloud-based services and applications. The major innovations proposed by 4CaaSt are the blueprint and its lifecycle management, a one-stop shop for Cloud services, and PaaS-level resource management featuring elasticity. 4CaaSt also provides a portfolio of ready-to-use Cloud-native services and Cloud-aware immigrant technologies.